Softwarization, programmable network control, and the use of all-encompassing controllers acting at different timescales are heralded as key drivers of the evolution toward next-generation cellular networks. These technologies have fostered newly designed intelligent, data-driven solutions for managing a large variety of cellular functions, which would be fundamentally impossible to implement in traditionally closed cellular architectures. Despite the evident industry interest in artificial intelligence (AI) and machine learning (ML) solutions for closed-loop control of the Radio Access Network (RAN), and despite several research efforts in the field, their design is far from mainstream: it remains a sophisticated and often overlooked operation. In this article, we discuss how to design AI/ML solutions for the intelligent closed-loop control of the Open RAN, providing guidelines and insights based on exemplary solutions with a strong performance record. We then show how these solutions can be instantiated on the O-RAN near-real-time RAN Intelligent Controller (RIC) through OpenRAN Gym, the first publicly available toolbox for data-driven O-RAN experimentation. We showcase a use case of an xApp developed with OpenRAN Gym and tested on a cellular network with 7 base stations and 42 users deployed on the Colosseum wireless network emulator. Our demonstration shows the high degree of flexibility of the OpenRAN Gym-based xApp development environment, which is independent of the deployment scenario and traffic demand.
Open Radio Access Network (RAN) architectures will enable interoperability, openness, and programmable data-driven control in next-generation cellular networks. However, developing and testing effective solutions that generalize across heterogeneous cellular deployments and scales, and that optimize network performance in such diverse environments, is a complex task that remains largely unexplored. In this article, we present OpenRAN Gym, a unified, open, and O-RAN-compliant experimental toolbox for data collection, design, prototyping, and testing of end-to-end data-driven control solutions for next-generation Open RAN systems. OpenRAN Gym extends and combines into a unique solution several software frameworks for data collection of RAN statistics and for RAN control, together with a lightweight O-RAN near-real-time RAN Intelligent Controller (RIC) tailored to run on experimental wireless platforms. We first provide an overview of the various architectural components of OpenRAN Gym and describe how to collect data and design, train, and test artificial intelligence and machine learning O-RAN-compliant applications (xApps) at scale. We then describe in detail how to test the developed xApps on softwarized RANs, and provide an example of two xApps developed with OpenRAN Gym that were used to control a network with 7 base stations and 42 users deployed on the Colosseum testbed. Finally, we show how solutions developed with OpenRAN Gym on Colosseum can be exported to real-world heterogeneous wireless platforms, such as the Arena testbed and the POWDER and COSMOS platforms of the PAWR program. OpenRAN Gym and its software components are open source and publicly available to the research community.
Despite the new opportunities brought about by the Open RAN, advances in ML-based network automation have been slow, mainly because of the unavailability of large-scale datasets and experimental testing infrastructure. This slows down the development and widespread adoption of deep reinforcement learning (DRL) agents on real networks, delaying progress in intelligent and autonomous RAN control. In this paper, we address these challenges by proposing practical solutions and software pipelines for the design, training, testing, and experimental evaluation of DRL-based closed-loop control in the Open RAN. We introduce ColO-RAN, the first publicly available large-scale O-RAN testing framework with software-defined radios in the loop. Building on the scale and computational capabilities of the Colosseum wireless network emulator, ColO-RAN enables ML research at scale using O-RAN components, programmable base stations, and a "wireless data factory". Specifically, we design and develop three exemplary xApps for DRL-based RAN slicing, scheduling, and online model training, and evaluate their performance on a cellular network with 7 softwarized base stations and 42 users. Finally, we showcase the portability of ColO-RAN to different platforms by deploying it on Arena, an indoor programmable testbed. The extensive results from our first-of-a-kind large-scale evaluation highlight the benefits and challenges of DRL-based adaptive control. They also provide insights into the development of wireless DRL pipelines, from data analysis to the design of DRL agents, and into the trade-offs associated with training on a live network. ColO-RAN and the collected large-scale datasets will be publicly released to the research community.
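The closed-loop control pattern that such xApps implement (read KPI-derived state, choose a resource-allocation action, learn from the observed reward) can be illustrated on a toy problem. Everything below is invented for illustration: the two-state "load" MDP, the PRB actions, and the reward model; a simple dynamic-programming Q-iteration stands in for the paper's DRL agents.

```python
# Toy stand-in for an xApp's closed loop: two KPI-derived "load" states,
# three PRB-allocation actions, invented rewards, uniform transitions.
STATES = ["low_load", "high_load"]
ACTIONS = [2, 4, 6]          # PRBs granted to a slice (illustrative)
GAMMA = 0.9

def reward(state, action):
    # Under high load, granting more PRBs helps; under low load,
    # granting fewer leaves resources for other slices.
    return action if state == "high_load" else -action

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(200):         # synchronous Q-iteration sweeps
    V = {s: max(Q[(s, a)] for a in ACTIONS) for s in STATES}
    ev = sum(V.values()) / len(STATES)   # uniform next-state model
    Q = {(s, a): reward(s, a) + GAMMA * ev for s in STATES for a in ACTIONS}

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)                # {'low_load': 2, 'high_load': 6}
```

In the real pipeline the state is a vector of RAN KPIs, the agent is a DRL policy trained offline on collected data, and actions reach the base stations through the near-real-time RIC.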
Colosseum is an open and publicly available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on fully programmable "white-box" platforms. With 256 state-of-the-art software-defined radios and a massive channel emulator core, Colosseum can emulate virtually any scenario, enabling the design, development, and testing of solutions at scale in a variety of deployments and channel conditions. These radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite impulse response filters. The filters model the taps of the desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper, we describe Colosseum as a testbed that is for the first time open to the research community. We describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases, including prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum-sharing and unmanned-aerial-vehicle scenarios. A roadmap for Colosseum's future updates concludes the paper.
Simulating quantum channels is a fundamental primitive in quantum computing, since quantum channels define general (trace-preserving) quantum operations. An arbitrary quantum channel cannot be exactly simulated using a finite-dimensional programmable quantum processor, making it important to develop optimal approximate simulation techniques. In this paper, we study the challenging setting in which the channel to be simulated varies adversarially with time. We propose the use of matrix exponentiated gradient descent (MEGD), an online convex optimization method, and analytically show that it achieves a sublinear regret in time. Through experiments, we validate the main results for time-varying dephasing channels using a programmable generalized teleportation processor.
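A minimal sketch of the MEGD update is shown below: taking the gradient step in the matrix-logarithm domain and renormalizing the trace keeps the iterate a valid density matrix. The random "gradients" here are illustrative stand-ins for the paper's channel-simulation losses, not its actual teleportation setup.

```python
import numpy as np
from scipy.linalg import expm, logm

def megd_step(rho, grad, eta):
    """One matrix exponentiated gradient descent (MEGD) update."""
    # Gradient step in the matrix-log domain keeps the iterate
    # positive definite; trace renormalization keeps it a density matrix.
    M = expm(logm(rho) - eta * grad)
    M = (M + M.conj().T) / 2          # numerical re-symmetrization
    return M / np.trace(M).real

# Toy run on 2x2 symmetric matrices with random illustrative "gradients".
rng = np.random.default_rng(0)
rho = np.eye(2) / 2                   # maximally mixed initial state
for _ in range(10):
    A = rng.normal(size=(2, 2))
    rho = megd_step(rho, (A + A.T) / 2, eta=0.1)
```

The multiplicative (mirror-descent) form of the update is what yields the sublinear regret guarantee in the online setting.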
Electricity prices in liberalized markets are determined by the supply of and demand for electric power, which are in turn driven by various external influences that vary strongly in time. Under perfect competition, the merit order principle describes how dispatchable power plants enter the market in the order of their marginal costs to meet the residual load, i.e., the difference between load and renewable generation. Many market models implement this principle to predict electricity prices but typically require certain assumptions and simplifications. In this article, we present an explainable machine learning model for prices on the German day-ahead market, which substantially outperforms a benchmark model based on the merit order principle. Our model is designed for the ex-post analysis of prices and thus builds on various external features. Using SHapley Additive exPlanations (SHAP) values, we disentangle the roles of the different features and quantify their importance from empirical data. Load, wind, and solar generation are most important, as expected, but wind power appears to affect prices more strongly than solar power does. Fuel prices also rank highly and show nontrivial dependencies, including strong interactions with other features revealed by a SHAP interaction analysis. Large generation ramps are correlated with high prices, again with strong feature interactions, due to the limited flexibility of nuclear and lignite plants. Our results further contribute to model development by providing quantitative insights directly from data.
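The Shapley attribution that SHAP builds on can be illustrated with a brute-force computation on a toy two-feature "price model". The set function and its numbers below are invented for illustration; in the paper the attributions are computed for a trained ML regressor over many features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, n):
    """Exact Shapley values of a set function f over features 0..n-1."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for T in combinations(others, k):
                S = frozenset(T)
                # Shapley weight for a coalition S not containing i
                wgt = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += wgt * (f(S | {i}) - f(S))
    return phi

# Invented "price model": load pushes prices up, wind pushes them down,
# with a small load-wind interaction (numbers are illustrative only).
def price(S):
    load, wind = (0 in S), (1 in S)
    return 50.0 + 20.0 * load - 10.0 * wind - 5.0 * (load and wind)

phi = shapley_values(price, 2)
print(phi)   # [17.5, -12.5]: load raises the price, wind lowers it
```

Note how the interaction term is split evenly between the two features, which is exactly the behavior a SHAP interaction analysis makes explicit.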
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to a high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have emerged as an alternative. By using a generative model to learn the distribution of healthy brain data patterns, we can identify pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models have shown great results as normative models for identifying neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes spread across several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models in detecting the subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). We then obtained the likelihood of neurotypical controls and of psychiatric patients with early-stage schizophrenia from an independent dataset (N=93) from the Human Connectome Project. Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when distinguishing controls from individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian processes, showing the promise of deep generative models for individualized analyses.
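The last step, turning model likelihoods into a separability measure, can be sketched as follows. The scores below are invented for illustration (the paper reports AUROC 0.82 on real scans); the anomaly score is the negative log-likelihood, and the AUROC is computed via the Mann-Whitney statistic.

```python
def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen patient scores as more anomalous than a randomly
    chosen control (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Invented anomaly scores: negative log-likelihood under the model.
controls = [-1.2, -0.9, -0.8, -0.3]   # high likelihood -> low anomaly
patients = [-0.1, -0.4, 0.3, -0.9]    # deviations -> higher anomaly
print(auroc(patients, controls))      # 0.78125
```

This threshold-free measure is what allows normative scores from different model families (generative, brain-age, Gaussian Process) to be compared directly.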
We examined multiple deep neural network (DNN) architectures for their suitability in predicting neurotransmitter concentrations from labeled in vitro fast-scan cyclic voltammetry (FSCV) data collected on carbon fiber electrodes. Suitability is determined by predictive performance in the "out-of-probe" case, by the response to artificially induced electrical noise, and by the ability to predict when the model will be errant for a given probe. This work extends prior comparisons of time series classification models by focusing on this specific task. It extends previous applications of machine learning to FSCV by using a much larger dataset and by incorporating recent advances in deep neural networks. The InceptionTime architecture, a deep convolutional neural network, had the best absolute predictive performance of the models tested but was more susceptible to noise. A naive multilayer perceptron architecture had the second-lowest prediction error and was less affected by the artificial noise, suggesting that convolutions may not be as important for this task as one might suspect.
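As a hedged sketch of the regression setup: synthetic data stand in for the labeled voltammograms, and scikit-learn's `MLPRegressor` stands in for the naive multilayer perceptron compared in the text. None of the numbers below come from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the FSCV task: regress a "concentration" from
# a 1-D voltammogram-like sweep. Data and target are invented; the real
# task uses labeled in vitro sweeps from carbon fiber electrodes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 200 sweeps, 50 samples each
y = 2.0 * X[:, 10] + X[:, 30]         # synthetic concentration signal

mlp = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                   max_iter=1000, random_state=0)
mlp.fit(X, y)
print(round(mlp.score(X, y), 2))      # training R^2, close to 1.0
```

The robustness comparison in the text would then repeat this fit after injecting noise into `X` and evaluating on sweeps from held-out probes.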
Dimensionality reduction has become an important research topic as the demand for interpreting high-dimensional datasets has increased rapidly in recent years. Many dimensionality reduction methods perform well at preserving the overall relationships among data points when mapping them to a lower-dimensional space. However, these existing methods fail to incorporate differences in importance among features. To address this problem, we propose a novel meta-method, DimenFix, which can operate on top of any base dimensionality reduction method that involves a gradient-descent-like process. By allowing users to define the importance of different features, which is then taken into account during dimensionality reduction, DimenFix creates new possibilities for visualizing and understanding a given dataset. At the same time, DimenFix neither increases the time cost nor reduces the quality of the dimensionality reduction relative to the base method used.
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models can provide very accurate results even in challenging applications, they are difficult to interpret. Aiming to provide some interpretability for such models, one of the most famous methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome for an instance of interest. Since the calculation of SHAP values requires computations over all possible coalitions of attributes, its computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates these values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. First, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of $k$-additive games from game theory, which helps reduce the computational effort when estimating SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate SHAP values.
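Kernel SHAP's core idea, fitting a weighted linear model over coalitions with the Shapley kernel, can be sketched as follows. With all $2^M$ coalitions enumerated (feasible only for small $M$) the fit recovers the exact Shapley values; sampling coalitions instead yields the cheaper approximation discussed above. The toy set function is invented for illustration.

```python
import numpy as np
from itertools import product
from math import comb

def kernel_shap(f, M):
    """Kernel SHAP as a weighted least-squares fit over coalitions."""
    rows, y, w = [], [], []
    for z in product([0, 1], repeat=M):
        s = sum(z)
        S = frozenset(i for i, zi in enumerate(z) if zi)
        rows.append(z)
        y.append(f(S))
        if 0 < s < M:
            # Shapley kernel weight for a coalition of size s
            w.append((M - 1) / (comb(M, s) * s * (M - s)))
        else:
            w.append(1e6)    # soft-pin the empty and full coalitions
    Zmat = np.column_stack([np.ones(len(rows)), np.array(rows)])
    sw = np.sqrt(np.array(w))[:, None]
    beta, *_ = np.linalg.lstsq(sw * Zmat, sw[:, 0] * np.array(y), rcond=None)
    return beta[1:]          # beta[0] approximates f(empty set)

def f(S):
    # Invented model: features 0 and 1 act additively; 0 and 2 interact.
    return 10.0 * (0 in S) + 5.0 * (1 in S) + 2.0 * (0 in S) * (2 in S)

phi = kernel_shap(f, 3)
print(np.round(phi, 3))      # [11.  5.  1.]
```

Restricting the game to be $k$-additive, as the paper proposes, shrinks the number of regression coefficients and hence the number of coalition evaluations needed.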